
fix(ds4): memory spike in sparse pooled attention at 4k+ context#17

Merged
Blaizzy merged 1 commit into Blaizzy:pc/add-deepseekv4flash-model from 0xClandestine:fix/ds4-ram-usage
Apr 27, 2026

Conversation


@0xClandestine commented on Apr 27, 2026

Summary

  • Replace the element-wise broadcast multiply + sum with an equivalent matmul in _sparse_pooled_attention (see the sketch below)
  • The old path materialized a (B, H, L, topk, D) intermediate — at 4k context with H=64, topk=512, D=512 this is ~137 GB per operation (×2)
  • The matmul path computes the same result via (B, L, H, D) @ (B, L, D, topk) using ~0.25 GB
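
A minimal sketch of the two paths for illustration — shapes, variable names, and tensor layout here are assumptions, not the actual code in _sparse_pooled_attention:

```python
import mlx.core as mx

B, H, L, topk, D = 1, 64, 4096, 512, 512

q = mx.random.normal((B, H, L, D))          # queries (hypothetical layout)
pooled = mx.random.normal((B, L, topk, D))  # pooled keys per position (hypothetical layout)

# Old path: broadcasting (B,H,L,1,D) against (B,1,L,topk,D) materializes a
# (B,H,L,topk,D) intermediate — ~68.7e9 elements, ~137 GB in 16-bit.
# scores = (q[:, :, :, None, :] * pooled[:, None, :, :, :]).sum(-1)

# New path: a batched matmul produces the (B, L, H, topk) result directly.
q_t = q.transpose(0, 2, 1, 3)                # (B, L, H, D)
scores = q_t @ pooled.transpose(0, 1, 3, 2)  # (B, L, H, D) @ (B, L, D, topk)
```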

Context

From benchmarks on M3 Ultra (512 GB):

2k  → mem 99.2GB   kv 0.03GB
4k  → mem 222.5GB  kv 0.04GB  ← +123 GB from 2x context

The KV cache barely grows, so the spike comes entirely from intermediate tensors in _sparse_pooled_attention, which runs during prefill (L > 1) on compress_ratio=4 layers with the indexer. The arithmetic below bears this out.
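
A back-of-the-envelope check, assuming 16-bit activations and batch size 1:

```python
B, H, L, topk, D = 1, 64, 4096, 512, 512
bytes_per_elem = 2  # assumed bfloat16/float16 activations

old = B * H * L * topk * D * bytes_per_elem  # (B, H, L, topk, D) intermediate
new = B * L * H * topk * bytes_per_elem      # (B, L, H, topk) matmul result

print(f"old path: {old / 1e9:.1f} GB, new path: {new / 1e9:.2f} GB")
# old path: 137.4 GB, new path: 0.27 GB
# Two such intermediates at 4k (each half that size at 2k) roughly account
# for the +123 GB jump in the table above.
```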

PR: ml-explore#1192
Issue: ml-explore#1192 (comment)

Test plan

  • Verify numerical equivalence (max diff ~1e-4 in float32, well within noise); a sketch of such a check follows this list
  • Benchmark memory at 4k+ context — should stay close to 2k baseline
  • Confirm generation quality unchanged
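
One way to sanity-check equivalence at small shapes — illustrative only, not the PR's actual test code:

```python
import mlx.core as mx

# Small shapes so the old broadcast path also fits in memory.
B, H, L, topk, D = 1, 4, 128, 16, 32
q = mx.random.normal((B, H, L, D))
pooled = mx.random.normal((B, L, topk, D))

# Old: broadcast multiply + sum -> (B, H, L, topk)
old = (q[:, :, :, None, :] * pooled[:, None, :, :, :]).sum(-1)

# New: batched matmul, then transpose back to (B, H, L, topk) for comparison
new = (q.transpose(0, 2, 1, 3) @ pooled.transpose(0, 1, 3, 2)).transpose(0, 2, 1, 3)

print(mx.abs(old - new).max())  # tiny at these shapes; the PR reports ~1e-4 at full size
```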

The element-wise (q * pooled).sum() path broadcasts a (B,H,L,1,D) tensor
against (B,1,L,topk,D), creating a (B,H,L,topk,D) intermediate. At 4k
context with H=64, topk=512, D=512 this is ~137 GB per operation (×2).

Replace with equivalent matmul: (B,L,H,D) @ (B,L,D,topk) which produces
the (B,L,H,topk) result directly with ~0.25 GB peak memory.
@0xClandestine changed the title from "Fix ~274 GB memory spike in sparse pooled attention at 4k+ context" to "fix(ds4): memory spike in sparse pooled attention at 4k+ context" on Apr 27, 2026
@ivanfioravanti

It works!!! Testing up to 64K context right now. 16K achieved on M5 Max!

@ivanfioravanti

And it's faster! 42 tps! Maybe this matmul is using the Neural Accelerator!

@Blaizzy (Owner) left a comment


LGTM, thanks 🚀

@Blaizzy merged commit dd6b92f into Blaizzy:pc/add-deepseekv4flash-model on Apr 27, 2026
2 checks passed